75 research outputs found

    4-Dimensional deformation part model for pose estimation using Kalman filter constraints

    [EN] The goal of this research work is to improve the accuracy of human pose estimation using the deformation part model without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to the deformation part model, which was formerly defined based only on RGB channels, to obtain a 4-dimensional deformation part model. In addition, computational complexity can be controlled by reducing the number of joints taken into account in a reduced 4-dimensional deformation part model. Finally, complete solutions are obtained by solving the omitted joints using inverse kinematic models. The main goal of this article is to analyze the effect on pose estimation accuracy of adding a Kalman filter to the 4-dimensional deformation part model partial solutions. Experiments run on two data sets show that this method improves pose estimation accuracy compared with state-of-the-art methods, and that a Kalman filter helps to increase this accuracy.
    The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was partially financed by Plan Nacional de I+D, Comisión Interministerial de Ciencia y Tecnología (FEDER-CICYT) under the project DPI2013-44227-R.
    Martínez Bertí, E.; Sánchez Salmerón, AJ.; Ricolfe Viala, C. (2017). 4-Dimensional deformation part model for pose estimation using Kalman filter constraints. International Journal of Advanced Robotic Systems. 14(3):1-13. https://doi.org/10.1177/1729881417714230
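The Kalman filtering step lends itself to a compact sketch. The following constant-velocity filter smooths one joint coordinate over time; the 1-D simplification, the noise parameters and the function name are illustrative assumptions, not the paper's 4-dimensional formulation:

```python
import numpy as np

def kalman_smooth_joint(measurements, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over one joint's 1-D coordinate.

    `measurements` is a sequence of noisy positions (hypothetical input;
    the published method filters 4-D part-model partial solutions).
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: [pos, vel]
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([measurements[0], 0.0])     # initial state
    P = np.eye(2)                            # initial state covariance
    out = []
    for z in measurements:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step with the new measurement z
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

On a stationary joint the filter converges to the constant measurement; on moving joints it trades lag against noise suppression via `q` and `r`.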

    Efficient lens distortion correction for decoupling in calibration of wide angle lens cameras

    © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    In photogrammetry applications, camera parameters must be as accurate as possible to avoid deviations in measurements from images. Errors increase if wide angle lens cameras are used. Moreover, the coupling between intrinsic and extrinsic camera parameters and the lens distortion model notably influences the result of the calibration process. This paper proposes a method for calibrating wide angle lens cameras which takes into account the existing hard coupling. The proposed method obtains stable results, which do not depend on how the image lens distortion is corrected.
    This work was supported in part by the Universidad Politécnica de Valencia research funds (PAID 2010-2431 and PAID 10017), the Generalitat Valenciana (GV/2011/057) and the Spanish government and the European Community under Project DPI2010-20814-C02-02 (FEDER-CICYT) and Project DPI2010-20286 (CICYT). The associate editor coordinating the review of this paper and approving it for publication was Dr. Subhas C. Mukhopadhyay.
    Ricolfe Viala, C.; Sánchez Salmerón, AJ.; Valera Fernández, Á. (2013). Efficient lens distortion correction for decoupling in calibration of wide angle lens cameras. IEEE Sensors Journal. 13(2):854-863. https://doi.org/10.1109/JSEN.2012.2229704
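Lens distortion correction of the kind discussed here typically inverts a radial polynomial model. As a minimal sketch (assuming a two-coefficient radial model, which is simpler than the paper's treatment), distorted coordinates can be undistorted by fixed-point iteration:

```python
import numpy as np

def undistort_points(xd, yd, k1, k2, iters=10):
    """Invert a two-parameter radial distortion model by fixed-point iteration.

    (xd, yd) are distorted normalised image coordinates; k1, k2 are radial
    coefficients (illustrative values, not the paper's calibration results).
    The forward model assumed here is x_d = x_u * (1 + k1*r^2 + k2*r^4).
    """
    xu, yu = xd, yd                          # initial guess: no distortion
    for _ in range(iters):
        r2 = xu * xu + yu * yu               # squared radius of current guess
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor    # refine the undistorted estimate
    return xu, yu
```

For moderate distortion the iteration converges in a handful of steps; wide-angle lenses with strong distortion may need more coefficients and iterations.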

    Multi-step approach for automated scaling of photogrammetric micro-measurements

    [EN] Photogrammetry can be used for the measurement of small objects with micro-features, with good results, low costs, and the possible addition of texture information to the 3D models. The performance of this technique is strongly affected by the scaling method, since it retrieves a model that must be scaled after its elaboration. In this paper, a fully automated multi-step scaling system is presented, based on machine vision algorithms for retrieving blurred areas. This method allows researchers to find the correct scale factor for a photogrammetric micro model and is experimentally compared to the existing manual method based on the German guideline VDI/VDE 2634, Part 3. The experimental tests are performed on millimeter-sized certified workpieces, finding micrometric errors relative to reference measurements. As a consequence, the method is a candidate for measurements of micro-features. The proposed tool improves the performance of the manual method by eliminating operator-dependent procedures. The software tool is available online as supplementary material and represents a powerful tool to address scaling issues in micro-photogrammetric activities.
    Frangione, A.; Sánchez Salmerón, AJ.; Modica, F.; Percoco, G. (2019). Multi-step approach for automated scaling of photogrammetric micro-measurements. The International Journal of Advanced Manufacturing Technology. 102(1-4):747-757. https://doi.org/10.1007/s00170-018-03258-w
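The machine-vision step that retrieves blurred areas can be approximated by a generic focus measure. A common choice is the variance of a Laplacian response; this sketch is only an assumption about how such scoring might look, not the paper's algorithm:

```python
import numpy as np

def laplacian_variance(img):
    """Focus measure: variance of a 3x3 Laplacian response over a greyscale image.

    A low value indicates a blurred region. This is a generic sharpness metric,
    offered purely as a sketch of how blurred areas might be scored.
    """
    # 4-neighbour Laplacian evaluated on the interior pixels
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())
```

Once the in-focus region is located, the scale factor itself is a ratio between a certified reference length and the same length measured in model units.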

    Automatic detection of differences between images to estimate product irritation using HET-CAM

    [EN] HET-CAM is an in vitro technique used to estimate the ocular irritation that a chemical substance, or a mixture of substances, would produce in the human eye (mucous membranes). The technique is based on the manual analysis of the changes produced in the veins at different time points after applying a product to the membrane of a fertilised egg. This analysis is carried out by an expert, who must observe and record, over a given period of time, the changes that have occurred. In order to automate this process, this work proposes a first approach to the automatic detection of the changes in blood distribution produced in the egg. Specifically, image processing and segmentation techniques are proposed to facilitate the monitoring process performed by the experts.
    https://doi.org/10.17979/spudc.978849749808
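A first approximation to the change detection described above is plain image differencing between time points. The sketch below (function name and threshold are illustrative assumptions) reports the fraction of pixels that changed:

```python
import numpy as np

def changed_fraction(before, after, threshold=0.1):
    """Fraction of pixels whose intensity changed by more than `threshold`.

    `before` and `after` are greyscale images scaled to [0, 1], captured at
    two time points. This is a minimal change-detection sketch; the paper
    applies segmentation tuned specifically to blood vessels.
    """
    diff = np.abs(after.astype(float) - before.astype(float))
    return float(np.mean(diff > threshold))
```

In practice the two frames would first be registered and the vessels segmented, so that the statistic reflects vascular change rather than illumination drift.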

    Reducing Results Variance in Lifespan Machines: An Analysis of the Influence of Vibrotaxis on Wild-Type Caenorhabditis elegans for the Death Criterion

    [EN] Nowadays, various artificial vision-based machines automate the lifespan assays of C. elegans. These automated machines present wider variability in results than manual assays because, in the latter, worms can be poked one by one to determine whether they are alive or not. Lifespan machines normally use a "dead or alive" criterion based on nematode position or pose changes, without poking worms. However, worms barely move in their last days of life, even though they are still alive. Therefore, either a long monitoring period is necessary to observe motility in order to guarantee worms are actually dead, or a stimulus to prompt worm movement is required to reduce the variability of the lifespan measure. Here, a new automated vibrotaxis-based method for lifespan machines is proposed as a solution to prompt a motion response in all worms cultured on standard Petri plates, in order to better distinguish between live and dead individuals. This simple automated method allows the stimulation of all animals across the whole plate at the same time and intensity, increasing experiment throughput. The experimental results exhibited improved live-worm detection using this method, and most live nematodes (>93%) reacted to the vibration stimulus. This method increased machine sensitivity by decreasing results variance by approximately one half (from +/- 1 individual error per plate to +/- 0.6), and the error in the lifespan curve was reduced as well (from 2.6% to 1.2%).
    This study was supported by the Universitat Politècnica de València with Project 20170020-UPV, Plan Nacional de I+D with Project RTI2018-094312-B-I00 and by European FEDER funds. ADM Nutrition, Biopolis SL and Archer Daniels Midland provided support in the supply of C. elegans.
    Puchalt-Rodríguez, JC.; Layana-Castro, PE.; Sánchez Salmerón, AJ. (2020). Reducing Results Variance in Lifespan Machines: An Analysis of the Influence of Vibrotaxis on Wild-Type Caenorhabditis elegans for the Death Criterion. Sensors. 20(21):1-17. https://doi.org/10.3390/s20215981
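The death criterion described above reduces to asking whether each worm moved after the stimulus. A minimal sketch, assuming binary segmentation masks per worm are already available (the machine's real pipeline segments worms from plate images first; names and the pixel threshold are illustrative):

```python
import numpy as np

def responded(roi_before, roi_after, min_changed_pixels=5):
    """Decide whether a worm moved after the vibration stimulus.

    `roi_before` / `roi_after` are boolean masks of one worm's region of
    interest captured before and after vibrating the plate. A worm whose
    mask barely changes is a candidate for the "dead" label.
    """
    changed = np.count_nonzero(roi_before != roi_after)
    return changed >= min_changed_pixels
```

The threshold guards against segmentation jitter being mistaken for a genuine motion response.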

    Improving skeleton algorithm for helping Caenorhabditis elegans trackers

    [EN] One of the main problems when monitoring Caenorhabditis elegans nematodes (C. elegans) is tracking their poses with automatic computer vision systems. This is a challenge given the marked flexibility of their bodies and the variety of poses they can adopt during their individual behaviour, which becomes even more complicated when worms aggregate with others while moving. This work proposes a simple solution that combines several computer vision techniques to help determine certain worm poses and to identify each worm during aggregation or in coiled shapes. The new method is based on the distance transformation function to obtain better worm skeletons. Experiments were performed with 205 plates, each with 10, 15, 30, 60 or 100 worms, totalling approximately 100,000 worm poses. The proposed method was compared with a classic skeletonisation method, finding that 2196 problematic poses improved by between 1% and 22% on average in the pose predictions of each worm.
    This study was supported by the Plan Nacional de I+D with Project RTI2018-094312-B-I00 and by European FEDER funds. ADM Nutrition, Biopolis S.L. and Archer Daniels Midland supplied the C. elegans plates. Some strains were provided by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). Mrs. Maria-Gabriela Salazar-Secada developed the skeleton annotation application. Mr. Jordi Tortosa-Grau annotated worm skeletons.
    Layana-Castro, PE.; Puchalt-Rodríguez, JC.; Sánchez Salmerón, AJ. (2020). Improving skeleton algorithm for helping Caenorhabditis elegans trackers. Scientific Reports. 10(1):1-12. https://doi.org/10.1038/s41598-020-79430-8
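The distance-transform idea behind the improved skeletons can be illustrated in a few lines: compute each foreground pixel's distance to the background and keep the ridge of local maxima. This toy version (brute-force distances, no coil or aggregation handling) only sketches the principle, not the published algorithm:

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance to the nearest background pixel.

    O(n^2) over pixels, fine for a toy example; a real tracker would use an
    optimised transform such as scipy.ndimage.distance_transform_edt.
    `mask` is a boolean image, True = worm body.
    """
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape)
    for y, x in fg:
        dist[y, x] = np.hypot(bg[:, 0] - y, bg[:, 1] - x).min()
    return dist

def ridge_skeleton(mask):
    """Keep foreground pixels that are local maxima of the distance map.

    A crude distance-transform skeleton; medially located pixels survive,
    boundary pixels are discarded.
    """
    dist = distance_transform(mask)
    pad = np.pad(dist, 1)
    # Stack the 8 neighbour shifts of the distance map
    neighbours = np.stack([pad[1 + dy:pad.shape[0] - 1 + dy,
                               1 + dx:pad.shape[1] - 1 + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if (dy, dx) != (0, 0)])
    return mask & (dist >= neighbours.max(axis=0)) & (dist > 0)
```

On an elongated blob the surviving pixels trace the medial axis, which is why distance-map ridges make good worm skeletons.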

    Fall detection based on the gravity vector using a wide-angle camera

    Falls in elderly people are becoming an increasing healthcare problem, since life expectancy and the number of elderly people who live alone have increased over recent decades. If fall detection systems could be installed easily and economically in homes, telecare could be provided to alleviate this problem. In this paper we propose a low-cost fall detection system based on a single wide-angle camera. Wide-angle cameras are used to reduce the number of cameras required for monitoring large areas. Using a calibrated video system, two new features based on the gravity vector are introduced for fall detection: the angle between the gravity vector and the line from feet to head of the human, and the size of the upper body. Additionally, to differentiate between fall events and controlled lying-down events, the speed of change of these features is also measured. Our experiments demonstrate that our system is 97% accurate for fall detection. (C) 2014 Elsevier Ltd. All rights reserved.
    This work was partially financed by Programa Estatal de Investigación, Desarrollo e Innovación Orientada a los Retos de la Sociedad (Dirección General de Investigación Científica y Técnica, Ministerio de Economía y Competitividad) under the project DPI2013-44227-R.
    Bosch Jorge, M.; Sánchez Salmerón, AJ.; Valera Fernández, Á.; Ricolfe Viala, C. (2014). Fall detection based on the gravity vector using a wide-angle camera. Expert Systems with Applications. 41(17):7980-7986. https://doi.org/10.1016/j.eswa.2014.06.045
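The first of the two features above is straightforward to compute once feet and head positions are known. A sketch, with illustrative coordinates standing in for the paper's calibrated wide-angle camera measurements:

```python
import numpy as np

def upper_body_angle(feet, head, gravity=(0.0, 0.0, -1.0)):
    """Angle in degrees between the gravity vector and the feet-to-head line.

    `feet` and `head` are 3-D points (illustrative inputs). An angle near
    180 degrees means the person is upright (feet-to-head opposes gravity);
    an angle near 90 degrees suggests a lying pose, as after a fall.
    """
    v = np.asarray(head, float) - np.asarray(feet, float)
    g = np.asarray(gravity, float)
    cosang = np.dot(v, g) / (np.linalg.norm(v) * np.linalg.norm(g))
    # Clip guards against rounding pushing |cos| just above 1
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

Tracking how fast this angle changes is what lets the system separate a fall from a controlled lying-down event.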

    Shelf life prediction of expired vacuum-packed chilled smoked salmon based on a KNN tissue segmentation method using hyperspectral images

    Ready-to-eat foods that do not receive a heat treatment before being consumed can be at risk of foodborne hazards and spoilage, so it would be of great interest to have a method for monitoring their safety. This work expands on and enhances previous successful studies with hyperspectral imaging in the SW-NIR range. Specifically, a k-nearest-neighbours model was developed to classify salmon tissue into white myocommata stripes (fat) and muscle (lean) tissue. The Partial Least Squares models developed confirm that a spatial segmentation should be performed before a shelf life model can be calculated. Employing the fat spectra and only the 7 most correlated wavelengths, a support vector machine model was calculated to classify into days 0, 10, 20, 40 and 60 with 87.2% prediction accuracy. These results make the developed method very promising as a non-destructive way to analyse the shelf life of vacuum-packed chilled smoked salmon fillets.
    This work has been partially funded by the Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria de España (INIA - Spanish National Institute for Agriculture and Food Research and Technology) through research project RTA2012-00062-C04-02, with the support of European FEDER funds and the DPI2013-44227-R project.
    Ivorra Martínez, E.; Sánchez Salmerón, AJ.; Verdú Amat, S.; Barat Baviera, JM.; Grau Meló, R. (2016). Shelf life prediction of expired vacuum-packed chilled smoked salmon based on a KNN tissue segmentation method using hyperspectral images. Journal of Food Engineering. 178:110-116. https://doi.org/10.1016/j.jfoodeng.2016.01.008
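The k-nearest-neighbours tissue classifier can be sketched with plain Euclidean voting over pixel spectra; k and all variable names here are illustrative, not the paper's tuned settings:

```python
import numpy as np

def knn_classify(train_spectra, train_labels, query, k=3):
    """Label a pixel spectrum by majority vote among its k nearest
    training spectra (Euclidean distance).

    `train_spectra` is an (N, bands) array of reference spectra with known
    tissue labels; `query` is one pixel's spectrum to classify.
    """
    d = np.linalg.norm(train_spectra - query, axis=1)   # distance to each reference
    nearest = np.asarray(train_labels)[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]                    # majority label
```

Applied per pixel over a hyperspectral cube, this yields the fat/lean segmentation map that the shelf life models are then built on.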

    Towards Lifespan Automation for Caenorhabditis elegans Based on Deep Learning: Analysing Convolutional and Recurrent Neural Networks for Dead or Live Classification

    [EN] The automation of lifespan assays with C. elegans in standard Petri dishes is a challenging problem because several issues hinder detection, such as occlusions at the plate edges, dirt accumulation, and worm aggregations. Moreover, determining whether a worm is alive or dead can be complex, as worms barely move during the last few days of their lives. This paper proposes a method combining traditional computer vision techniques with a live/dead C. elegans classifier, based on convolutional and recurrent neural networks, applied to low-resolution image sequences. In addition to proposing a new method to automate lifespan assays, the use of data augmentation techniques is proposed to train the network in the absence of large numbers of samples. The proposed method achieved small error rates (3.54% +/- 1.30% per plate) with respect to the manual curve, demonstrating its feasibility.
    This study was supported by the Plan Nacional de I+D under the project RTI2018-094312-B-I00 and by the European FEDER funds.
    García-Garví, A.; Puchalt-Rodríguez, JC.; Layana-Castro, PE.; Navarro Moya, F.; Sánchez Salmerón, AJ. (2021). Towards Lifespan Automation for Caenorhabditis elegans Based on Deep Learning: Analysing Convolutional and Recurrent Neural Networks for Dead or Live Classification. Sensors. 21(14):1-17. https://doi.org/10.3390/s21144943
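The data augmentation idea mentioned above relies on label-preserving transforms: flipping or rotating an image sequence does not change whether the worm is alive. A sketch with an assumed (T, H, W) frame layout; the paper's exact augmentation set is not reproduced here:

```python
import numpy as np

def augment_sequence(frames, rng):
    """Return one randomly augmented copy of a low-resolution image sequence.

    Flips and 90-degree rotations preserve the live/dead label of the worm,
    so they are safe augmentations for training with few samples.
    `frames` has shape (T, H, W); `rng` is a numpy random Generator.
    """
    ops = [
        lambda f: f,                              # identity
        lambda f: f[:, :, ::-1],                  # horizontal flip
        lambda f: f[:, ::-1, :],                  # vertical flip
        lambda f: np.rot90(f, k=1, axes=(1, 2)),  # rotate 90 degrees
    ]
    return ops[rng.integers(len(ops))](frames).copy()
```

Drawing several augmented copies per recorded sequence multiplies the effective training set without new plate experiments.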

    Multiview motion tracking based on a cartesian robot to monitor Caenorhabditis elegans in standard Petri dishes

    [EN] Data from manual healthspan assays of the nematode Caenorhabditis elegans (C. elegans) can be complex to quantify. The first attempts to quantify motor performance were done manually, using the so-called thrashing or body bends assay. Some laboratories have automated these approaches using methods that help substantially to quantify these characteristic movements in small well plates. Even so, it is sometimes difficult to find differences in motor behaviour between strains, and/or between treated and untreated worms. For this reason, we present here a new automated method that increases resolution flexibility, in order to capture more movement detail in large standard Petri dishes, in such a way that those movements are less restricted. This method is based on a Cartesian robot, which enables high-resolution image capture in standard Petri dishes. Several cameras, mounted strategically on the robot and working with different fields of view, capture the required C. elegans visual information. We performed a locomotion-based healthspan experiment with several mutant strains and were able to detect statistically significant differences between two strains that show very similar movement patterns.
    This work was supported by the research agency of the Spanish Ministry of Science and Innovation under Grant RTI2018-094312-B-I00 (European FEDER funds).
    Puchalt-Rodríguez, JC.; González-Rojo, JF.; Gómez-Escribano, AP.; Vázquez-Manrique, RP.; Sánchez Salmerón, AJ. (2022). Multiview motion tracking based on a cartesian robot to monitor Caenorhabditis elegans in standard Petri dishes. Scientific Reports. 12(1):1-11. https://doi.org/10.1038/s41598-022-05823-6
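Combining a Cartesian robot pose with camera measurements amounts to a simple coordinate mapping when the camera looks straight down at the plate. This sketch assumes a linear pixel-to-millimetre scale and a camera axis normal to the plate; all parameter names are illustrative, not the paper's calibration:

```python
import numpy as np

def pixel_to_plate(pixel, robot_xy, mm_per_px, image_centre):
    """Map a camera pixel to plate coordinates in millimetres.

    Assumes the camera is carried by the Cartesian robot with its optical
    axis normal to the plate, so a pixel offset from the image centre
    scales linearly and adds to the robot's current (x, y) position.
    """
    px = np.asarray(pixel, float) - np.asarray(image_centre, float)
    return np.asarray(robot_xy, float) + mm_per_px * px
```

With every camera's detections mapped into this common plate frame, tracks from different fields of view can be fused into one trajectory per worm.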